Make eessi configure gpu node automatically #841
base: main
Conversation
Force-pushed from fbb37b9 to 7e7cb24: Make eessi configure gpu node automatically
Force-pushed from 7e7cb24 to 2c45c6c: Make eessi configure gpu node automatically
Force-pushed from 2c45c6c to fa78672
name: basic_users
when: enable_basic_users

- name: EESSI
I think we should replace this whole EESSI block with running the configure task directly. But it is not obvious how to do this TBF ...
I've tried to describe it in https://wiki.stackhpc.com/doc/slurm-development-ZXjBRByl6K#h-only-inventory-vars
so you could change the entire eessi thing to do this.
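For what it's worth, a minimal sketch of what "running the configure task directly" could look like, assuming a role named eessi with a configure.yml tasks file and an enable_eessi inventory variable (all names here are assumptions for illustration, not taken from the appliance):

# Sketch only: include just the EESSI configure tasks rather than duplicating
# the whole EESSI block. Role name, tasks file and variable name are assumptions.
- name: Configure EESSI
  ansible.builtin.include_role:
    name: eessi
    tasks_from: configure.yml
  when: enable_eessi | default(false) | bool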
| cmd: "cvmfs_config setup" | ||
|
|
||
| # configure gpus | ||
| - name: Check for NVIDIA driver |
I'm not sure whether there is always a /dev/nvidia0? Could you check with @jovial please, e.g. for MIG and vGPU configs? Otherwise we'd have to do something like https://github.com/stackhpc/ansible-role-openhpc/blob/be6196540ca8007a0e45f2c3b2596ed0ff77fc13/library/gpu_info.py#L42 but TBH this approach here is much simpler!
Not in a Slurm appliance, but on an OpenStack compute host with MIG (it's hard for me to log in to the Slurm deployment with MIG, but @priteau could check there):
[stack@gpu3 ~]$ ls -lia /dev/nvidia*
4226 crw-rw-rw-. 1 root root 503, 3 Feb 19 2025 /dev/nvidia-vgpu3
3610 crw-rw-rw-. 1 root root 503, 4 Feb 11 2025 /dev/nvidia-vgpu4
3665 crw-rw-rw-. 1 root root 503, 5 Feb 11 2025 /dev/nvidia-vgpu5
2205 crw-rw-rw-. 1 root root 503, 8 Jan 23 2025 /dev/nvidia-vgpu8
2091 crw-rw-rw-. 1 root root 503, 0 Jan 23 2025 /dev/nvidia-vgpuctl
2082 crw-rw-rw-. 1 root root 195, 0 Jan 23 2025 /dev/nvidia0
2086 crw-rw-rw-. 1 root root 195, 1 Jan 23 2025 /dev/nvidia1
2080 crw-rw-rw-. 1 root root 195, 255 Jan 23 2025 /dev/nvidiactl
So checking for /dev/nvidia0 looks good to me. You will likely not have the nvidia-vgpu* device nodes in a Slurm deployment, as they only appear if you create a vGPU instance.
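As a minimal sketch of the device-based check discussed above, assembled from the snippets in this thread (the set_fact variable name is an assumption, since the PR's actual fact name isn't shown here):

# Presence of /dev/nvidia0 implies the NVIDIA driver is loaded and at least
# one GPU is visible to the host.
- name: Check for NVIDIA GPU
  ansible.builtin.stat:
    path: /dev/nvidia0
  register: nvidia_driver

- name: Set fact if NVIDIA GPU is present
  ansible.builtin.set_fact:
    nvidia_gpu_present: "{{ nvidia_driver.stat.exists }}"  # fact name is illustrative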
| cmd: "cvmfs_config setup" | ||
|
|
||
| # configure gpus | ||
| - name: Check for NVIDIA driver |
Strictly, this is checking for the device, which is only present if the driver is loaded. So I suggest:
-   - name: Check for NVIDIA driver
+   - name: Check for NVIDIA GPU
path: /dev/nvidia0
register: nvidia_driver

- name: Set fact if NVIDIA driver is present
-   - name: Set fact if NVIDIA driver is present
+   - name: Set fact if NVIDIA GPU is present
…pliance into configure-gpus
Apply changes to task names
Add tasks to eessi/configure.yml and compute-init.yml to run the EESSI link_nvidia_host_libraries.sh script on GPU nodes with NVIDIA drivers installed. The tasks will be run when either site.yml is run or a rebuild via Slurm is completed.
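Building on the device check sketched earlier, a hedged sketch of the script-running task; the CVMFS path and EESSI version below are assumptions based on EESSI's documented layout, not taken from this PR:

# Run EESSI's link_nvidia_host_libraries.sh only when the GPU device exists.
# The script path and version are assumptions for illustration.
- name: Link NVIDIA host libraries into EESSI
  ansible.builtin.command:
    cmd: /cvmfs/software.eessi.io/versions/2023.06/scripts/gpu_support/nvidia/link_nvidia_host_libraries.sh
  become: true
  when: nvidia_driver.stat.exists  # registered by the "Check for NVIDIA GPU" task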